National Repository of Grey Literature: 30 records found (records 1-10 shown)
Compression and Quality Assessment of ECG Signals
Němcová, Andrea ; Tkacz, Ewaryst (referee) ; Kudrna, Petr (referee) ; Vítek, Martin (advisor)
Lossy compression of ECG signals is a useful and still-developing field, and new compression algorithms appear constantly. The field, however, lacks standards for assessing signal quality after compression: many different compression algorithms exist, but they can be compared either only roughly or not at all. Moreover, the compression literature nowhere describes whether, and how, pathologies affect the performance of compression algorithms. This dissertation provides an overview of all methods found for assessing the quality of ECG signals after compression. In addition, 10 new methods were created. All of these methods were analysed, and based on the results, 12 methods suitable for assessing ECG signal quality after compression were recommended. A new compression algorithm, Single-Cycle Fractal-Based (SCyF), is also introduced. The SCyF algorithm is inspired by a fractal-based method and uses a single cycle of the ECG signal as the domain. It was tested on four different databases, with the quality of the compressed signals evaluated by the 12 recommended methods. The results were compared with a very popular wavelet-based compression algorithm that uses Set Partitioning in Hierarchical Trees (SPIHT). The testing procedure also serves as an example of what a standard for evaluating the performance of compression algorithms should look like. Furthermore, it was statistically proven that physiological and pathological signals compress differently: pathological signals were compressed with lower efficiency and quality than physiological ones.
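The abstract does not name the 12 recommended quality measures, but the compression ratio (CR) and the percentage root-mean-square difference (PRD) are the two most common measures in the ECG-compression literature, so they serve here as an illustrative sketch in Python:

```python
# Illustrative sketch of two standard ECG compression quality measures
# (not the thesis's recommended set, which the abstract does not enumerate).
import numpy as np

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """CR: size of the original bitstream over size of the compressed one."""
    return original_bits / compressed_bits

def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Percentage RMS difference between original and reconstructed signal."""
    num = np.sum((original - reconstructed) ** 2)
    den = np.sum(original ** 2)
    return 100.0 * np.sqrt(num / den)

# Example: a 3 Hz sine sampled at 360 Hz as a stand-in for one ECG segment.
t = np.linspace(0, 1, 360)
ecg = np.sin(2 * np.pi * 3 * t)
recon = ecg + 0.01 * np.random.randn(t.size)  # pretend lossy reconstruction
print(f"PRD = {prd(ecg, recon):.2f} %")
```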
Effect of Noise on Image Compression
Pavlík, Jiří ; Svoboda, Pavel (referee) ; Bařina, David (advisor)
This thesis examines the effect of different types of noise on the performance of significant image compression formats. Three types of noise are examined: Gaussian noise, coarser-grained Gaussian noise, and shot noise. Two older lossy image compression formats, JPEG and JPEG 2000, are compared with the newer WebP format. The examination is based on experiments in which images are intentionally degraded by noise, compressed into the examined formats, and the results compared against the same experiments run on the original noise-free images. The effect of noise on the performance of the compared formats is measured with the SSIM image quality metric. The experiments suggest that coarser-grained Gaussian noise is the least disruptive of the examined noise types, while the other two degrade image quality considerably more.
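A minimal sketch of such an experimental pipeline, assuming Pillow and scikit-image as the tooling (the thesis's actual tools are not specified):

```python
# Degrade an image with Gaussian noise, compress it in memory, and score the
# round trip with SSIM. Pillow handles JPEG and WebP; "test.png" is a placeholder.
import io
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def add_gaussian_noise(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    noisy = img.astype(np.float64) + np.random.normal(0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def compress_ssim(img: np.ndarray, fmt: str, quality: int = 75) -> float:
    """Compress in memory; return SSIM between input and decompressed output."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format=fmt, quality=quality)
    decoded = np.asarray(Image.open(buf).convert("L"))
    return ssim(img, decoded, data_range=255)

gray = np.asarray(Image.open("test.png").convert("L"))
noisy = add_gaussian_noise(gray)
for fmt in ("JPEG", "WEBP"):
    print(fmt, compress_ssim(noisy, fmt))
```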
Analysis of JPEG 2000 Settings
Vrtělová, Lucie ; Klíma, Ondřej (referee) ; Bařina, David (advisor)
This thesis focuses on the analysis of JPEG 2000 settings. Its aim is to explore and compare individual settings and different implementations. The analysis was carried out with two libraries that implement JPEG 2000, Kakadu and OpenJPEG. The data were processed by a framework written in Python 3. The resulting solution provides a comprehensive view of how individual JPEG 2000 settings behave on specific data, allowing JPEG 2000 to be deployed on such data more quickly.
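A sketch of how such a framework might drive the two encoders from Python 3 via their command-line tools; -r (opj_compress) and -rate (kdu_compress) are those tools' documented rate controls, but the thesis framework itself is not reproduced here:

```python
# Encode the same source at several rates with both JPEG 2000 implementations.
# Assumes opj_compress (OpenJPEG) and kdu_compress (Kakadu) are on the PATH.
import subprocess

def encode_openjpeg(src: str, dst: str, ratio: float) -> None:
    # opj_compress takes target compression ratios with -r
    subprocess.run(["opj_compress", "-i", src, "-o", dst, "-r", str(ratio)],
                   check=True)

def encode_kakadu(src: str, dst: str, bpp: float) -> None:
    # kdu_compress takes a target bitrate in bits per pixel with -rate
    subprocess.run(["kdu_compress", "-i", src, "-o", dst, "-rate", str(bpp)],
                   check=True)

for ratio in (10, 20, 40, 80):
    encode_openjpeg("input.ppm", f"out_r{ratio}.jp2", ratio)
```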
Memory Reduction of Stateful Network Traffic Processing
Hlaváček, Martin ; Puš, Viktor (referee) ; Kořenek, Jan (advisor)
This master's thesis deals with memory reduction in stateful network traffic processing. Its goal is to explore new possibilities for reducing memory consumption during network processing. As an introduction, the thesis provides the motivation and reasons for seeking new methods of memory reduction. The following part theoretically analyses NetFlow technology and two basic methods that can, in principle, reduce the memory demands of stateful processing. The thesis then describes the design and implementation of a solution that applies these two methods to the NetFlow architecture. The final part summarizes the main properties of this solution when confronted with real data.
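For context, the per-flow state that NetFlow-style processing maintains, and whose memory footprint is the target of the reduction, is essentially one record per 5-tuple. An illustrative sketch (not the thesis design):

```python
# Minimal stateful flow cache: one aggregated record per 5-tuple key.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    packets: int = 0
    bytes: int = 0
    first_seen: float = 0.0
    last_seen: float = 0.0

flows: dict[tuple, FlowRecord] = {}

def update(src_ip, dst_ip, src_port, dst_port, proto, size, ts):
    """Fold one observed packet into its flow's record."""
    key = (src_ip, dst_ip, src_port, dst_port, proto)
    rec = flows.setdefault(key, FlowRecord(first_seen=ts))
    rec.packets += 1
    rec.bytes += size
    rec.last_seen = ts
```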
Compression of IP Flow Records
Kaščák, Andrej ; Kajan, Michal (referee) ; Žádník, Martin (advisor)
My master's thesis deals with the compression of flow records in network devices. Its outcome should reduce the memory consumed by flow records and simplify the processing of network traffic. As an introduction, I describe the protocols used for storing and manipulating flow data, followed by a discussion of the compression methods employed today. The next part presents an in-depth analysis of the source data, showing its structure and composition and yielding observations that are later used when testing existing compression methods and assessing their potential for flow compression. I then turn to lossy compression and, based on the test results, describe a new approach built on clustering flows and subsequently compressing the clusters lossily. The conclusion evaluates the possibilities of the method and summarizes the thesis, along with suggestions for further research.
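A toy sketch of the clustering intuition, with hypothetical field names: grouping similar flow records before compression improves the ratio because records within a cluster share most field values.

```python
# Compare compressed size of flow records before and after grouping similar
# records together (here "clustering" is simply sorting by shared fields).
import json
import zlib

records = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 443, "dport": 50001, "bytes": 1200},
    {"src": "192.168.1.5", "dst": "8.8.8.8", "sport": 53211, "dport": 53, "bytes": 80},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 443, "dport": 50002, "bytes": 900},
]

def compressed_size(recs) -> int:
    return len(zlib.compress(json.dumps(recs).encode()))

clustered = sorted(records, key=lambda r: (r["src"], r["dst"], r["sport"]))
print(compressed_size(records), "->", compressed_size(clustered))
```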
Error Resilience Analysis for JPEG 2000
Kovalčík, Marek ; Klíma, Ondřej (referee) ; Bařina, David (advisor)
The aim of this thesis is to analyze the modern image compression format JPEG 2000, specifically the effect of its error resilience mechanisms under different settings. It examines the impact of embedding markers into the codestream, and of using compression modes intended to improve error resilience, on the ability to repair damaged images. Quality is evaluated with the PSNR metric, which measures the similarity between the compressed and reference files. Adding certain markers to the data stream, or using certain compression modes, should protect a JPEG 2000 file against damage during image reconstruction. To test this hypothesis, a model was created that randomly damages the compressed file and evaluates the decompressed images. The Kakadu library, which provides efficient handling of the JPEG 2000 format, is used throughout. The experimental data set consists of various photographs in uncompressed PPM format, in both smaller and higher resolutions. The outcome of this work is a recommendation of which compression settings to use for which group of images so that compression is both efficient and as robust as possible. The end of the thesis is devoted to comparing the error resilience of JPEG 2000 and CCSDS 122.0.
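A minimal sketch of such a damage model (not the thesis's model): flip random bytes in a JPEG 2000 codestream, decode what survives, and score the damage with PSNR.

```python
# Corrupt a .jp2 file and measure reconstruction quality. Assumes Pillow is
# built with OpenJPEG support; "image.jp2" is a placeholder file name.
import random
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio as psnr

def corrupt(path_in: str, path_out: str, n_bytes: int = 16) -> None:
    data = bytearray(open(path_in, "rb").read())
    # Spare the first 64 bytes so the main header stays parseable.
    for pos in random.sample(range(64, len(data)), n_bytes):
        data[pos] ^= 0xFF
    open(path_out, "wb").write(bytes(data))

reference = np.asarray(Image.open("image.jp2"))
corrupt("image.jp2", "damaged.jp2")
try:
    damaged = np.asarray(Image.open("damaged.jp2"))
    print("PSNR:", psnr(reference, damaged))
except OSError:
    print("stream too damaged to decode")
```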
Image Compression Using the Wavelet Transform
Kaše, David ; Klíma, Ondřej (referee) ; Bařina, David (advisor)
This thesis deals with image compression using the wavelet, contourlet, and shearlet transforms. It starts with a brief look at the image compression problem and quality measurement, then presents the basic concepts of wavelets, multiresolution analysis, and scaling functions, and examines each transform in detail. The coefficient-coding algorithms covered are EZW, SPIHT, and, marginally, EBCOT. The second part describes the design and implementation of the constructed library. The last part compares the results of the transforms with the JPEG 2000 format. The comparison identified types of images on which the implemented contourlet and shearlet transforms were more effective than the wavelet transform; the JPEG 2000 format was not surpassed.
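The transform-threshold-reconstruct loop underlying wavelet compression can be sketched with PyWavelets (the thesis implements its own library; the wavelet, level, and threshold below are illustrative choices):

```python
# 2D wavelet decomposition, zeroing of small coefficients (the lossy step
# that coders such as EZW and SPIHT exploit), and reconstruction.
import numpy as np
import pywt

img = np.random.rand(256, 256)  # stand-in for a grayscale image
coeffs = pywt.wavedec2(img, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)

# Keep only the largest 5 % of coefficients by magnitude.
threshold = np.percentile(np.abs(arr), 95)
arr[np.abs(arr) < threshold] = 0.0

rec_coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
reconstructed = pywt.waverec2(rec_coeffs, "db4")[:256, :256]
print("MSE:", np.mean((img - reconstructed) ** 2))
```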
Image quality analysis using Fourier transform
Tkadlecová, Markéta ; Druckmüller, Miloslav (referee) ; Hoderová, Jana (advisor)
This thesis deals with the two-dimensional Fourier transform and its use in digital image quality assessment. An algorithm based on amplitude spectra is introduced and tested on different sets of images, and its possible uses and disadvantages are described. The fundamental mathematical theory needed to understand the algorithm is included, chiefly the properties of amplitude spectra. The theory of digital images and the characteristics that can affect their quality is also described. A demonstration program was developed for testing image quality.
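The core idea can be illustrated briefly: a sharp image carries more energy in the high frequencies of its amplitude spectrum than a blurred one. The statistic below is an illustrative stand-in, not the thesis's exact algorithm.

```python
# Fraction of amplitude-spectrum energy outside a central low-frequency disc.
import numpy as np

def high_frequency_ratio(img: np.ndarray, radius_frac: float = 0.25) -> float:
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    high = spectrum[dist > radius_frac * min(h, w)].sum()
    return high / spectrum.sum()

sharp = np.random.rand(128, 128)                     # noise: many high frequencies
smooth = np.tile(np.linspace(0, 1, 128), (128, 1))   # gradient: mostly low ones
print(high_frequency_ratio(sharp), ">", high_frequency_ratio(smooth))
```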
Lossy Light Field Compression
Dlabaja, Drahomír ; Milet, Tomáš (referee) ; Bařina, David (advisor)
The aim of this paper is to propose, implement, and evaluate a new lossy compression method for light field images. The proposed method extends the JPEG method to four dimensions and introduces ideas that lead to better compression performance. Correlation between light field views is exploited in both view dimensions by a four-dimensional discrete cosine transform. The lossy part of the encoding is performed by quantization, similarly to JPEG. The proposed method is implemented as a C++ program library. The paper compares the proposed method with the JPEG, JPEG 2000, and HEVC intra-frame image compression methods, and with the HEVC video compression method. The results show that the proposed method outperforms the reference methods on images with a higher number of views, whereas the HEVC video method is better for images with fewer views or at very low bitrates.
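A minimal sketch of the 4D transform-and-quantize core described above, using SciPy's dctn (the thesis implements this as a C++ library; the array shape and quantization step are illustrative):

```python
# 4D DCT over a light field, uniform quantization, and inverse transform.
import numpy as np
from scipy.fft import dctn, idctn

# Light field indexed as (view_row, view_col, y, x); random data as a stand-in.
lf = np.random.rand(4, 4, 8, 8)

coeffs = dctn(lf, norm="ortho")      # 4D DCT over all four axes
q = 0.05                             # uniform quantization step (the lossy part)
quantized = np.round(coeffs / q)
reconstructed = idctn(quantized * q, norm="ortho")
print("max abs error:", np.abs(lf - reconstructed).max())
```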
Advanced objective measurement criteria applied to image compression
Šimek, Josef ; Průša, Zdeněk (referee) ; Malý, Jan (advisor)
This diploma thesis deals with the use of objective quality assessment methods in image data compression. Lossy compression always introduces some distortion into the processed data, degrading image quality. The intensity of this distortion can be measured using subjective or objective methods; objective criteria are used so that compression algorithms can be optimized. This work presents the SSIM index as a useful tool for describing the quality of compressed images. The lossy compression scheme is realized using the wavelet transform and the SPIHT algorithm. A modification of this algorithm was implemented that partitions the wavelet coefficients into separate tree-preserving blocks coded independently, which is especially suitable for parallel processing. For a given compression ratio, the traditional problem is solved: how to allocate the available bits among the spatial blocks to achieve the highest possible image quality. Possible approaches to this were discussed, and as a result several bit-allocation methods based on the MSSIM index were proposed. The effectiveness of these methods was tested in the MATLAB environment.
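A minimal Python sketch of the mean SSIM (MSSIM) computation that drives such bit-allocation decisions (the thesis itself used MATLAB): scikit-image's structural_similarity averages the local SSIM map, i.e. it returns MSSIM, and the full map lets a coder score individual spatial blocks.

```python
# MSSIM over a whole image plus a per-block score from the local SSIM map.
import numpy as np
from skimage.metrics import structural_similarity

ref = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
deg = np.clip(ref.astype(int) + np.random.randint(-8, 9, ref.shape),
              0, 255).astype(np.uint8)

mssim, ssim_map = structural_similarity(ref, deg, data_range=255, full=True)
block_score = ssim_map[:64, :64].mean()   # quality of one 64x64 spatial block
print(f"MSSIM = {mssim:.4f}, block = {block_score:.4f}")
```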
